
    Neuroevolution and an Application of an Agent-Based Model for Financial Market

    Market prediction is one of the most difficult problems for the machine learning community. Even though successful trading strategies can be found for the training data using various optimization methods, these strategies usually do not perform as well as expected on the test data. Therefore, selecting the correct strategy becomes problematic. In this study, we propose an evolutionary algorithm that produces a variety of trader agents while ensuring that the trading strategies they use are different. We argue that, because selecting the correct strategy is difficult, a variety of agents can be used simultaneously to reduce risk. We simulate trader agents on real market data and attempt to optimize their actions. Agent decisions are based on Echo State Networks: the agents take various market indicators as inputs and produce an action such as buy or sell. We optimize the parameters of the Echo State Networks using evolutionary algorithms.
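
    A minimal sketch of the kind of agent described above, assuming a standard tanh echo state reservoir; the class name ESNTrader, the reservoir size, and the threshold decision rule are illustrative assumptions, not the paper's actual configuration. The output weights W_out are the natural target for the evolutionary optimization the abstract mentions.

        import numpy as np

        class ESNTrader:
            def __init__(self, n_inputs, n_reservoir=50, seed=0):
                rng = np.random.default_rng(seed)
                self.W_in = rng.uniform(-1.0, 1.0, (n_reservoir, n_inputs))
                W = rng.uniform(-1.0, 1.0, (n_reservoir, n_reservoir))
                # Scale to spectral radius < 1 for the echo state property.
                W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
                self.W = W
                # Output weights: the part an evolutionary algorithm would mutate.
                self.W_out = rng.uniform(-1.0, 1.0, n_reservoir)
                self.state = np.zeros(n_reservoir)

            def act(self, indicators):
                # Reservoir update followed by a threshold decision.
                self.state = np.tanh(self.W_in @ indicators + self.W @ self.state)
                return "buy" if self.W_out @ self.state > 0.0 else "sell"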

    Evolving generalist controllers to handle a wide range of morphological variations

    Neuro-evolutionary methods have proven effective in addressing a wide range of tasks. However, the study of the robustness and generalisability of evolved artificial neural networks (ANNs) has remained limited. This has immense implications in fields like robotics, where such controllers are used in control tasks. Unexpected morphological or environmental changes during operation can risk failure if the ANN controllers are unable to handle these changes. This paper proposes an algorithm that aims to enhance the robustness and generalisability of the controllers. This is achieved by introducing morphological variations during the evolutionary process. As a result, it is possible to discover generalist controllers that can sufficiently handle a wide range of morphological variations without needing information about those morphologies or adaptation of their parameters. We perform an extensive experimental analysis in simulation that demonstrates the trade-off between specialist and generalist controllers. The results show that generalists are able to control a range of morphological variations at the cost of underperforming on a specific morphology relative to a specialist. This research contributes to the field by addressing the limited understanding of robustness and generalisability in neuro-evolutionary methods and proposes a method to improve these properties.
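
    A minimal sketch of the core evaluation idea under stated assumptions: simulate is a placeholder for a physics-simulator rollout, and the four-parameter morphology encoding and the ±20% variation range are invented for illustration. Averaging reward over sampled morphologies is what pushes selection toward generalists.

        import numpy as np

        def simulate(controller, morphology):
            # Placeholder: a real evaluation would roll out the ANN controller
            # in a physics simulator with this morphology and return its reward.
            return -float(np.sum((controller - morphology) ** 2))

        def generalist_fitness(controller, rng, n_variations=10):
            # Score one controller across sampled morphological variations so
            # that selection favours generalists over specialists.
            scores = []
            for _ in range(n_variations):
                # e.g. scale four body parameters by up to +/-20%.
                morphology = 1.0 + rng.uniform(-0.2, 0.2, size=4)
                scores.append(simulate(controller, morphology))
            return float(np.mean(scores))

    A specialist would instead be scored on a single fixed morphology, which is the trade-off the experiments examine.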

    Learning with Delayed Synaptic Plasticity

    The plasticity property of biological neural networks allows them to perform learning and optimize their behavior by changing their configuration. Inspired by biology, plasticity can be modeled in artificial neural networks by using Hebbian learning rules, i.e. rules that update synapses based on neuron activations and reinforcement signals. However, the distal reward problem arises when the reinforcement signals are not available immediately after each network output, making it difficult to associate the neuron activations that contributed to receiving the reinforcement signal. In this work, we extend Hebbian plasticity rules to allow learning in distal reward cases. We propose neuron activation traces (NATs): additional data storage in each synapse that keeps track of neuron activations. Delayed reinforcement signals are provided after each episode, relative to the network's performance during the previous episode. We employ genetic algorithms to evolve delayed synaptic plasticity (DSP) rules that perform synaptic updates based on NATs and delayed reinforcement signals. We compare DSP with an analogous hill climbing (HC) algorithm that does not incorporate the domain knowledge introduced with the NATs, and show that the synaptic updates performed by the DSP rules yield more effective training performance than the HC algorithm.
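
    A minimal sketch of the trace idea, assuming a single tanh layer; the outer-product trace and the reward-modulated update below are illustrative stand-ins for the evolved DSP rules, which the paper discovers with a genetic algorithm rather than fixing by hand.

        import numpy as np

        class DelayedPlasticLayer:
            def __init__(self, n_in, n_out, seed=0):
                rng = np.random.default_rng(seed)
                self.W = rng.uniform(-0.1, 0.1, (n_out, n_in))
                self.nat = np.zeros_like(self.W)  # neuron activation traces

            def forward(self, x):
                y = np.tanh(self.W @ x)
                # Record co-activation per synapse instead of updating now.
                self.nat += np.outer(y, x)
                return y

            def end_episode(self, reward, lr=0.01):
                # Delayed reinforcement modulates the accumulated traces.
                self.W += lr * reward * self.nat
                self.nat[:] = 0.0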

    Limited Evaluation Cooperative Co-evolutionary Differential Evolution for Large-scale Neuroevolution

    Many real-world control and classification tasks involve a large number of features. When artificial neural networks (ANNs) are used to model these tasks, the network architectures tend to be large. Neuroevolution is an effective approach for optimizing ANNs; however, two bottlenecks make its application challenging for high-dimensional networks using direct encoding. First, classic evolutionary algorithms tend not to scale well when searching large parameter spaces; second, evaluating the network over a large number of training instances is in general time-consuming. In this work, we propose an approach called the Limited Evaluation Cooperative Co-evolutionary Differential Evolution algorithm (LECCDE) to optimize high-dimensional ANNs. The proposed method optimizes the pre-synaptic weights of each post-synaptic neuron in a separate subpopulation using a Cooperative Co-evolutionary Differential Evolution algorithm, and employs a limited evaluation scheme where fitness evaluation is performed on a relatively small number of training instances based on fitness inheritance. We test LECCDE on three datasets of various sizes, and our results show that cooperative co-evolution significantly improves the test error compared to standard Differential Evolution, while the limited evaluation scheme facilitates a significant reduction in computing time.
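
    A compact sketch of the decomposition, assuming a toy single-layer model: each subpopulation evolves the pre-synaptic weight vector of one neuron with DE/rand/1/bin, evaluated in the context of the best rows of the other subpopulations. The random minibatch stands in for the paper's limited-evaluation and fitness-inheritance scheme, whose exact bookkeeping is not reproduced here.

        import numpy as np

        def loss(W, X, y):
            # Toy single-layer model standing in for the evolved ANN.
            pred = np.tanh(X @ W.T).sum(axis=1)
            return float(np.mean((pred - y) ** 2))

        def leccde_generation(subpops, best_rows, X, y, rng,
                              F=0.5, CR=0.9, batch=32):
            # Limited evaluation: fitness is estimated on a small batch.
            idx = rng.choice(len(X), size=batch, replace=False)
            Xb, yb = X[idx], y[idx]
            for i, pop in enumerate(subpops):   # one subpopulation per neuron
                for j in range(len(pop)):
                    # DE/rand/1 (index collisions with j ignored for brevity).
                    a, b, c = pop[rng.choice(len(pop), size=3, replace=False)]
                    mutant = a + F * (b - c)
                    cross = rng.random(pop.shape[1]) < CR   # binomial crossover
                    trial = np.where(cross, mutant, pop[j])
                    # Evaluate each row in the context of the best other rows.
                    W_trial = best_rows.copy()
                    W_trial[i] = trial
                    W_cur = best_rows.copy()
                    W_cur[i] = pop[j]
                    if loss(W_trial, Xb, yb) <= loss(W_cur, Xb, yb):
                        pop[j] = trial
                        best_rows[i] = trial
            return subpops, best_rows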

    Improving (1+1) Covariance Matrix Adaptation Evolution Strategy: a simple yet efficient approach

    In recent years, part of the meta-heuristic optimisation research community has called for a simplification of algorithmic design: indeed, while most state-of-the-art algorithms are characterised by a high level of complexity, complex algorithms are hard to understand and therefore hard to tune for specific real-world applications. Here, we follow this reductionist approach by straightforwardly combining two methods recently proposed in the literature, namely Re-sampling Inheritance Search (RIS) and the (1+1) Covariance Matrix Adaptation Evolution Strategy (CMA-ES). We propose an RI-(1+1)-CMA-ES algorithm that on the one hand improves upon the original (1+1)-CMA-ES, and on the other keeps the original spirit of simplicity at the basis of RIS. We show with an extensive experimental campaign that the proposed algorithm efficiently solves a large number of benchmark functions and is competitive with several modern optimisation algorithms that are much more complex in terms of algorithmic design.
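
    A hedged sketch of the combination: a (1+1) evolution strategy with one-fifth-rule step-size control, periodically restarted from a perturbed copy of the best-so-far solution in the spirit of Re-sampling Inheritance Search. The full covariance matrix adaptation of (1+1)-CMA-ES is omitted for brevity, so this is a simplification, not the actual RI-(1+1)-CMA-ES.

        import numpy as np

        def ri_one_plus_one_es(f, x0, rng, iters=2000, restart_every=500):
            x = np.array(x0, dtype=float)
            fx = f(x)
            best, fbest = x.copy(), fx
            sigma = 0.5
            for t in range(1, iters + 1):
                y = x + sigma * rng.standard_normal(x.shape)
                fy = f(y)
                if fy <= fx:
                    x, fx = y, fy
                    sigma *= np.exp(0.8 / len(x))    # success: expand step
                    if fy < fbest:
                        best, fbest = y.copy(), fy
                else:
                    sigma *= np.exp(-0.2 / len(x))   # failure: shrink step
                if t % restart_every == 0:
                    # Re-sampling inheritance: restart near the best-so-far.
                    x = best + 0.1 * rng.standard_normal(best.shape)
                    fx, sigma = f(x), 0.5
            return best, fbest

        # e.g. ri_one_plus_one_es(lambda v: float(np.sum(v * v)),
        #                         np.ones(10), np.random.default_rng(0))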

    Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions

    A fundamental aspect of learning in biological neural networks is the plasticity property, which allows them to modify their configurations during their lifetime. Hebbian learning is a biologically plausible mechanism for modeling the plasticity property in artificial neural networks (ANNs), based on the local interactions of neurons. However, the emergence of a coherent global learning behavior from local Hebbian plasticity rules is not very well understood. The goal of this work is to discover interpretable local Hebbian learning rules that can provide autonomous global learning. To achieve this, we use a discrete representation to encode the learning rules in a finite search space. These rules are then used to perform synaptic changes based on the local interactions of the neurons. We employ genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings. The resulting evolved rules converged into a set of well-defined interpretable types, which are thoroughly discussed. Notably, the performance of these rules, while adapting the ANNs during the learning tasks, is comparable to that of offline learning methods such as hill climbing.
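
    A minimal sketch of the discrete representation, with assumed values: each rule maps a quantised (post, pre) activation pair to a weight change drawn from a small finite alphabet, so a genetic algorithm can search the finite rule space. The alphabet, quantisation, and sizes below are illustrative, not the paper's.

        import numpy as np

        DELTAS = np.array([-0.1, 0.0, 0.1])   # finite set of synaptic changes

        def apply_rule(rule, pre, post, W):
            # Quantise activations to {0, 1} and look up the evolved entry
            # for each (post, pre) activation pair.
            p = (pre > 0).astype(int)
            q = (post > 0).astype(int)
            for i in range(W.shape[0]):
                for j in range(W.shape[1]):
                    W[i, j] += DELTAS[rule[q[i], p[j]]]
            return W

        rng = np.random.default_rng(0)
        rule = rng.integers(0, len(DELTAS), size=(2, 2))   # one GA genome
        W = apply_rule(rule, pre=rng.standard_normal(3),
                       post=rng.standard_normal(2), W=np.zeros((2, 3)))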

    A comparison of three Differential Evolution strategies in terms of early convergence with different population sizes

    Differential Evolution (DE) is a popular population-based continuous optimization algorithm that generates new candidate solutions by perturbing the existing ones, using scaled differences of randomly selected solutions in the population. As the number of generations increases, the differences between the solutions in the population decrease and the population tends to converge to a small hyper-volume within the search space. When these differences become too small, the evolutionary process becomes inefficient, as no further improvements on the fitness value can be made unless specific mechanisms for diversity preservation or restart are implemented. In this work, we present a set of preliminary results on measuring population diversity during the DE process, to investigate how different DE strategies and population sizes can lead to early convergence. In particular, we compare two standard DE strategies, namely “DE/rand/1/bin” and “DE/rand/1/exp”, and a rotation-invariant strategy, “DE/current-to-random/1”, with populations of 10, 30, 50, 100, and 200 solutions. Our results show, quite intuitively, that the lower the population size, the higher the chance of observing early convergence. Furthermore, the comparison of the different strategies shows that “DE/rand/1/exp” preserves population diversity the most, whereas “DE/current-to-random/1” preserves it the least.
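
    A small sketch, under assumptions, of how early convergence can be observed: one generation of the classic DE/rand/1/bin strategy plus a simple diversity measure (mean distance to the population centroid); the paper's exact diversity metric may differ.

        import numpy as np

        def diversity(pop):
            # Mean distance to the population centroid: one simple way to
            # track how much of the search space the population still covers.
            centroid = pop.mean(axis=0)
            return float(np.mean(np.linalg.norm(pop - centroid, axis=1)))

        def de_rand_1_bin(pop, f, rng, F=0.5, CR=0.9):
            # One generation of the classic DE/rand/1/bin strategy.
            nxt = pop.copy()
            for i in range(len(pop)):
                a, b, c = pop[rng.choice(len(pop), size=3, replace=False)]
                mutant = a + F * (b - c)
                cross = rng.random(pop.shape[1]) < CR
                cross[rng.integers(pop.shape[1])] = True  # force >= 1 gene
                trial = np.where(cross, mutant, pop[i])
                if f(trial) <= f(pop[i]):
                    nxt[i] = trial
            return nxt

    Calling diversity(pop) once per generation and plotting it over time exposes the shrinking hyper-volume the abstract describes.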

    Evolutionary Algorithm Based Approach for Modeling Autonomously Trading Agents

    The autonomously trading agents described in this paper produce a decision to act, such as buy, sell or hold, based on the input data. In this work, we simulate autonomously trading agents using the Echo State Network (ESN) model. We generate a collection of trading agents that use different trading strategies using Evolutionary Programming (EP). The agents are tested on real EUR/USD market data. The main goal of this study is to test the overall performance of this collection of agents when they are active simultaneously. Simulation results show that using different agents concurrently outperforms a single agent acting alone.
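
    A minimal sketch of running the collection concurrently, assuming each agent exposes an act(indicators) method (e.g. the ESNTrader sketch earlier in this list); the majority vote is an assumed aggregation, since the abstract only states that the agents act simultaneously.

        def ensemble_action(agents, indicators):
            # Collect each agent's decision for the current market state.
            votes = [agent.act(indicators) for agent in agents]
            # Majority vote is an assumed aggregation scheme; risk could
            # also be reduced by splitting capital across the agents.
            return max(set(votes), key=votes.count)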